
    MPGD's spatial and energy resolution studies with an adjustable point-like electron source

    11th Vienna Conference on Instrumentation (February 2007), to appear in the Proceedings (NIM A). Micropattern Gaseous Detectors (MPGDs), such as Micromegas or GEM, are used or foreseen in particle physics experiments that require very good spatial resolution. We have developed an experimental method to separate the contributions of transverse diffusion and of the multiplication process by varying the number of primary electrons generated by a point-like source. A pulsed nitrogen laser is focused by an optical set-up onto the drift electrode, which is made of a thin metal layer deposited on a quartz lamina. The number of primary electrons can be adjusted from a few to several thousand on a spot whose transverse size is less than 100 μm RMS. The detector can be positioned with an accuracy of 1 μm by a motorized three-dimensional system. This method was applied to a small Micromegas detector with a gain set between 10^3 and 2×10^4 and an injection of 60 to 2000 photoelectrons. Spatial resolutions as small as 5 μm were measured with 2000 primary electrons. An estimate of the upper limit of the relative gain variance can be obtained from the measurements.
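
    The separation of diffusion and gain fluctuations described above can be illustrated with a toy Monte Carlo (a hedged sketch with illustrative parameter values, not the authors' analysis code): each of N primary electrons is displaced by transverse diffusion and amplified with a fluctuating Polya gain of relative variance f; the charge-weighted centroid then has a spread of roughly sigma_d * sqrt((1 + f) / N), so varying N disentangles the two contributions.

```python
import numpy as np

# Toy model (illustrative, not the paper's analysis code):
# N primary electrons diffuse transversely (Gaussian, width sigma_d)
# and are amplified with Polya-distributed gains of relative variance
# f = 1/(1+theta).  The reconstructed position is the charge-weighted
# centroid; its spread is approximately sigma_d * sqrt((1+f)/N).

rng = np.random.default_rng(0)

n_primaries = 100   # primary electrons per laser pulse
sigma_d = 1.0       # transverse diffusion width (arbitrary units)
theta = 1.0         # Polya parameter -> relative gain variance f = 0.5
n_pulses = 20000    # number of simulated pulses

# A Polya gain with mean 1 is a Gamma(theta+1) scaled to unit mean.
gains = rng.gamma(theta + 1.0, 1.0 / (theta + 1.0),
                  size=(n_pulses, n_primaries))
positions = rng.normal(0.0, sigma_d, size=(n_pulses, n_primaries))

centroids = (gains * positions).sum(axis=1) / gains.sum(axis=1)
measured_resolution = centroids.std()

f = 1.0 / (1.0 + theta)
expected_resolution = sigma_d * np.sqrt((1.0 + f) / n_primaries)
```

    Repeating the simulation for several values of `n_primaries` and fitting the measured spread against 1/N is the numerical analogue of the experimental procedure sketched in the abstract.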

    Sub-Riemannian Fast Marching in SE(2)

    We propose a Fast Marching based implementation for computing sub-Riemannian (SR) geodesics in the roto-translation group SE(2), with a metric depending on a cost induced by the image data. The key ingredient is a Riemannian approximation of the SR metric. Then, a state-of-the-art Fast Marching solver that is able to deal with extreme anisotropies is used to compute an SR distance map as the solution of the corresponding eikonal equation. Subsequent backtracking on the distance map gives the geodesics. To validate the method, we consider the uniform-cost case, in which exact formulas for SR geodesics are known, and we show remarkable accuracy of the numerically computed SR spheres. We also show a dramatic decrease in computational time with respect to a previous PDE-based iterative approach. Regarding image analysis applications, we show the potential of considering these data-adaptive geodesics for a fully automated retinal vessel tree segmentation. Comment: CIARP 201
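
    To fix ideas, here is a minimal first-order Fast Marching solver for the isotropic eikonal equation |∇u| = 1/speed on a 2D grid. This is only a toy sketch under strong simplifying assumptions: the paper's solver handles the highly anisotropic SE(2) metric, which this version does not, and all names are ours.

```python
import heapq
import numpy as np

def fast_marching(speed, source, h=1.0):
    """First-order fast marching for |grad u| = 1/speed on a 2D grid.

    Illustrative isotropic sketch; the paper's solver additionally
    handles extreme anisotropies, which this toy version cannot.
    """
    n, m = speed.shape
    u = np.full((n, m), np.inf)
    u[source] = 0.0
    frozen = np.zeros((n, m), dtype=bool)
    heap = [(0.0, source)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if frozen[i, j]:
            continue
        frozen[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            x, y = i + di, j + dj
            if not (0 <= x < n and 0 <= y < m) or frozen[x, y]:
                continue
            # Upwind neighbour value along each axis.
            a = min(u[x - 1, y] if x > 0 else np.inf,
                    u[x + 1, y] if x < n - 1 else np.inf)
            b = min(u[x, y - 1] if y > 0 else np.inf,
                    u[x, y + 1] if y < m - 1 else np.inf)
            rhs = h / speed[x, y]
            if abs(a - b) >= rhs:      # one-sided update
                new = min(a, b) + rhs
            else:                      # two-sided quadratic update
                new = 0.5 * (a + b + np.sqrt(2.0 * rhs**2 - (a - b)**2))
            if new < u[x, y]:
                u[x, y] = new
                heapq.heappush(heap, (new, (x, y)))
    return u

# Uniform unit speed: the distance map is Euclidean distance from the
# source, exact along the grid axes and overestimated off-axis.
u = fast_marching(np.ones((11, 11)), source=(0, 0))
```

    Backtracking geodesics then amounts to gradient descent on `u` from a target point back to the source, as in the paper.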

    Generalized Forward-Backward Splitting

    This paper introduces the generalized forward-backward splitting algorithm for minimizing convex functions of the form $F + \sum_{i=1}^n G_i$, where $F$ has a Lipschitz-continuous gradient and the $G_i$'s are simple in the sense that their Moreau proximity operators are easy to compute. While the forward-backward algorithm cannot deal with more than $n = 1$ non-smooth function, our method generalizes it to the case of arbitrary $n$. Our method makes explicit use of the regularity of $F$ in the forward step, and the proximity operators of the $G_i$'s are applied in parallel in the backward step. This allows the generalized forward-backward to efficiently address an important class of convex problems. We prove its convergence in infinite dimension, and its robustness to errors in the computation of the proximity operators and of the gradient of $F$. Examples on inverse problems in imaging demonstrate the advantage of the proposed method in comparison to other splitting algorithms. Comment: 24 pages, 4 figures
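
    A minimal numerical sketch of the scheme, as we understand it from the abstract (the toy problem, step-size choice and all names are ours, not the paper's): auxiliary variables z_i absorb each prox in parallel, and the iterate is their weighted average.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def project_nonneg(v, t):
    """Proximity operator of the indicator of {x >= 0} (t is unused)."""
    return np.maximum(v, 0.0)

def generalized_forward_backward(grad_f, lip, proxes, x0, n_iter=1000):
    """Sketch of generalized forward-backward for
    min_x F(x) + sum_i G_i(x), with equal weights w_i = 1/len(proxes).

    grad_f : gradient of the smooth term F
    lip    : Lipschitz constant of grad_f
    proxes : list of functions prox(v, t) = prox_{t G_i}(v)
    """
    n = len(proxes)
    w = 1.0 / n
    gamma = 1.0 / lip               # step size (must be < 2/lip)
    z = [x0.copy() for _ in range(n)]
    x = x0.copy()
    for _ in range(n_iter):
        g = grad_f(x)               # one forward (gradient) step ...
        for i, prox in enumerate(proxes):
            # ... and the backward (prox) steps applied in parallel.
            z[i] = z[i] + prox(2 * x - z[i] - gamma * g, gamma / w) - x
        x = w * sum(z)
    return x

# Toy problem: min 0.5*||A x - b||^2 + lam*||x||_1  subject to  x >= 0,
# i.e. F smooth plus n = 2 non-smooth terms G_1, G_2.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, -2.0, 1.0])
lam = 0.1
grad_f = lambda x: A.T @ (A @ x - b)
lip = np.linalg.norm(A.T @ A, 2)
proxes = [lambda v, t: soft_threshold(v, lam * t), project_nonneg]
x = generalized_forward_backward(grad_f, lip, proxes, np.zeros(2))
```

    With a single non-smooth term the loop reduces to the classical forward-backward (proximal gradient) iteration.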

    Signification géodynamique des calcaires de plate-forme en cours de subduction sous l'arc des Nouvelles-Hébrides (Sud-Ouest de l'océan Pacifique)

    Note presented by Jean Dercourt. The analysis of carbonates from the New Hebrides Trench shows that three main episodes of shallow-water carbonate deposition occurred during the Late Eocene, the Late Oligocene-Early Miocene and the Mio-Pliocene-Quaternary, controlled by eustatism and tectonics.

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday August 27th till Friday August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Single electron response and energy resolution of a Micromegas detector

    Micro-Pattern Gaseous Detectors (MPGDs) such as Micromegas or GEM are used in particle physics experiments for their particle-tracking capabilities at high rates. Their excellent position resolutions are well known, but their energy characteristics have been less studied. The energy resolution is mainly affected by the ionisation processes and by detector gain fluctuations. This paper presents a method to measure separately those two contributions to the energy resolution of a Micromegas detector. The method relies on the injection of a controlled number of electrons. The Micromegas has a 1.6-mm drift zone and a 160-μm amplification gap. It is operated in Ne 95% - iC4H10 5% at atmospheric pressure. The electrons are generated by non-linear photoelectric emission induced by the photons of a pulsed 337-nm-wavelength laser coupled to a focusing system. The single electron response has been measured at different gains (3.7×10^4, 5.0×10^4 and 7.0×10^4) and is fitted with good agreement by a Polya distribution. From those fits, a relative gain variance of 0.31±0.02 is deduced. The setup has also been characterised at several voltages by fitting the energy resolution measured as a function of the number of primary electrons, ranging from 5 up to 210. A maximum value of the Fano factor (0.37) has been estimated for 5.9 keV X-rays interacting in the Ne 95% - iC4H10 5% gas mixture. Comment: Preprint submitted to Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, in press (2009)
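
    The Polya model used for the single-electron response can be sketched numerically. This is a hedged illustration (the parameter values below echo the abstract but the sampling setup is ours): a Polya distribution with mean gain G and shape parameter theta is a Gamma distribution with shape theta+1 and scale G/(theta+1), and its relative variance is f = 1/(1+theta).

```python
import numpy as np

# Polya single-electron gain spectrum:
#   P(g) ~ ((1+theta) g / G)^theta * exp(-(1+theta) g / G),
# with mean G and relative variance f = 1/(1+theta).  It equals a
# Gamma distribution with shape theta+1 and scale G/(theta+1), so it
# can be sampled directly.  theta and G below are illustrative values.

rng = np.random.default_rng(42)

theta = 2.2        # gives f = 1/(1+theta) ~ 0.31, the abstract's value
mean_gain = 3.7e4  # mean single-electron gain

gains = rng.gamma(theta + 1.0, mean_gain / (theta + 1.0), size=200_000)

f_sampled = gains.var() / gains.mean() ** 2
f_theory = 1.0 / (1.0 + theta)

# The relative energy resolution for N primary electrons then scales
# as sqrt((F + f) / N), with F the Fano factor of the gas mixture.
resolution_210 = np.sqrt((0.37 + f_theory) / 210.0)
```

    Fitting such a sampled spectrum (or a measured one) with the Polya form is what yields the relative gain variance quoted in the abstract.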

    PMm2: large photomultipliers and innovative electronics for the next-generation neutrino experiments

    The next generation of proton decay and neutrino experiments, the post-SuperKamiokande detectors such as those that will be installed in megaton-scale water tanks, will require very large photodetection surfaces and will produce a large volume of data. Even with large hemispherical photomultiplier tubes, the expected number of channels should reach hundreds of thousands. A funded R&D program to implement a solution is presented here. The very large photodetection surface is segmented into macro-pixels made of 16 hemispherical (12-inch) photomultiplier tubes connected to autonomous front-end electronics operating in a triggerless data acquisition mode. The expected data transmission rate is 5 Mb/s per cable, which can be achieved with existing techniques. This architecture considerably reduces the cost and facilitates industrialization. This document presents the simulations and measurements that define the requirements for the photomultipliers and the electronics. A prototype of the front-end electronics was successfully tested with 16 photomultiplier tubes supplied by a single high voltage, validating the built-in gain adjustment and the calibration principle. The first tests and calculations on the photomultiplier glass led to the study of a new package optimized for a 10-bar pressure in order to withstand the high underwater pressure. Comment: 1 pdf file, 4 pages, 4 figures, NDIP08, submitted to Nucl. Instr. and Meth. Phys. Res.

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low-complexity. These priors encompass as popular examples sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward the understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
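
    The parallel drawn above between sparsity and low rank shows up concretely in the proximity operators these regularizers plug into forward-backward splitting. A hedged sketch (function names are ours): the prox of the l1 norm soft-thresholds entries, and the prox of the nuclear norm applies the same soft-thresholding to singular values.

```python
import numpy as np

def prox_l1(x, t):
    """Proximity operator of t*||x||_1: entrywise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_nuclear(M, t):
    """Proximity operator of t*||M||_* (nuclear norm): soft-threshold
    the singular values -- the low-rank analogue of the l1 prox."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

# Singular values 3, 1, 0.2 become 2.5, 0.5, 0: the result has rank 2.
M = np.diag([3.0, 1.0, 0.2])
M_low_rank = prox_nuclear(M, 0.5)
```

    In a forward-backward scheme, each iteration alternates a gradient step on the smooth data-fidelity term with one such prox step, which is what drives the iterates onto the low-complexity model manifold.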